LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity

Authors

Abstract

We propose transferability from Large Geometric Vicinity (LGV), a new technique to increase the transferability of black-box adversarial attacks. LGV starts from a pretrained surrogate model and collects multiple weight sets from a few additional training epochs with a constant and high learning rate. LGV exploits two geometric properties that we relate to transferability. First, models that belong to a wider weight optimum are better surrogates. Second, we identify a subspace able to generate an effective surrogate ensemble among this wider optimum. Through extensive experiments, we show that LGV alone outperforms all (combinations of) four established test-time transformations by 1.8 to 59.9 percentage points. Our findings shed new light on the importance of the geometry of the weight space to explain the transferability of adversarial examples.
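The two steps the abstract describes can be sketched on a toy model: keep training a "pretrained" surrogate with a constant, high learning rate while saving weight snapshots, then attack the average gradient of the collected ensemble. Everything below (the logistic-regression surrogate, the data, the step counts) is a hypothetical stand-in for the paper's deep networks, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression "surrogate" on synthetic data (hypothetical
# stand-in for the pretrained DNN used in the paper).
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)

def weight_grad(w, X, y):
    """Gradient of the logistic loss w.r.t. the weights."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

# Step 1 (LGV idea): start from an already-trained model and run a few more
# epochs of SGD with a constant, deliberately high learning rate, collecting
# weight snapshots along the trajectory.
w = w_true + 0.1 * rng.normal(size=10)   # pretend this is the pretrained model
lr = 0.5                                  # constant and high on purpose
snapshots = []
for step in range(200):
    batch = rng.choice(len(X), size=32, replace=False)
    w -= lr * weight_grad(w, X[batch], y[batch])
    if step % 20 == 19:                   # roughly one snapshot per "epoch"
        snapshots.append(w.copy())

# Step 2: craft the adversarial example against the collected ensemble by
# averaging input gradients over the snapshots (an FGSM-style step).
def input_grad(w, x, label):
    """Gradient of the logistic loss w.r.t. the input x."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    return (p - label) * w

x0, label = X[0], y[0]
g = np.mean([input_grad(w_s, x0, label) for w_s in snapshots], axis=0)
x_adv = x0 + 0.25 * np.sign(g)            # single L_inf step, epsilon = 0.25
```

The constant high learning rate is what keeps SGD bouncing around a wide, flat region of the loss surface instead of settling into a sharp minimum, so the snapshots sample the "large geometric vicinity" the title refers to.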


Similar resources

Birkhoff's Theorem from a geometric perspective: A simple example

From Hilbert's theorem of zeroes, and from Noether's ideal theory, Birkhoff derived certain algebraic concepts (as explained by Tholen) that have a dual significance in general toposes, similar to their role in the original examples of algebraic geometry. I will describe a simple example that illustrates some of the aspects of this relationship. The dualization from algebra to geometr...


Fast boosting using adversarial bandits

In this paper we apply multi-armed bandits (MABs) to improve the computational complexity of AdaBoost. AdaBoost constructs a strong classifier in a stepwise fashion by selecting simple base classifiers and using their weighted “vote” to determine the final classification. We model this stepwise base classifier selection as a sequential decision problem, and optimize it with MABs where each arm ...
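The truncated snippet above models base-classifier selection as a bandit problem. A minimal sketch of that idea, assuming a standard UCB1 policy and hypothetical noisy "edge" rewards for each weak learner (the snippet is cut off before specifying the actual arm definition or reward):

```python
import math
import random

random.seed(0)

# Treat each candidate base classifier as a bandit arm; its unknown expected
# reward is its "edge" over random guessing (values below are made up).
true_edges = [0.05, 0.10, 0.30, 0.15]
counts = [0] * len(true_edges)
values = [0.0] * len(true_edges)

def pull(arm):
    """Noisy reward for evaluating base classifier `arm` on a sample."""
    return true_edges[arm] + random.gauss(0, 0.05)

for t in range(1, 501):
    if t <= len(true_edges):
        arm = t - 1                        # play each arm once first
    else:
        # UCB1: maximize empirical mean plus exploration bonus
        arm = max(range(len(true_edges)),
                  key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))
    r = pull(arm)
    counts[arm] += 1
    values[arm] += (r - values[arm]) / counts[arm]  # running mean update

best = max(range(len(true_edges)), key=lambda a: counts[a])
```

The point of the bandit view is computational: instead of evaluating every weak learner at every boosting round, the policy concentrates evaluations on the learners that empirically look strongest.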


Boosting Adversarial Attacks with Momentum

Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most of the existing adversarial attacks can only fool a black-box model with a low success rate because...
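The momentum-based attack this snippet refers to (MI-FGSM) accumulates normalized gradients across iterations to stabilize update directions and escape poor local maxima. A minimal sketch on a toy logistic model; the model, data, and hyperparameters are hypothetical stand-ins, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=5)                    # toy surrogate model weights
x, label = rng.normal(size=5), 1.0

def input_grad(w, x, label):
    """Gradient of the logistic loss w.r.t. the input x."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    return (p - label) * w

eps, steps, mu = 0.3, 10, 1.0             # L_inf budget, iterations, momentum decay
alpha = eps / steps
g = np.zeros_like(x)
x_adv = x.copy()
for _ in range(steps):
    grad = input_grad(w, x_adv, label)
    # Accumulate L1-normalized gradients into the momentum buffer.
    g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
    x_adv = x_adv + alpha * np.sign(g)
    # Project back into the epsilon-ball around the original input.
    x_adv = x + np.clip(x_adv - x, -eps, eps)
```

Normalizing each per-step gradient before accumulation keeps the momentum term from being dominated by gradient-magnitude differences across iterations.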



Understanding and Enhancing the Transferability of Adversarial Examples

State-of-the-art deep neural networks are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models. Consequently the adversary can leverage it to attack deployed systems without any ...



Journal

Journal title: Lecture Notes in Computer Science

Year: 2022

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-19772-7_35